
NarraGuide: an LLM-based Narrative Mobile Robot for Remote Place Exploration

Hu, Yaxin, Sato, Arissa J., Du, Jingxin, Ye, Chenming, Zhu, Anjun, Praveena, Pragathi, Mutlu, Bilge

arXiv.org Artificial Intelligence

Robotic telepresence enables users to navigate and experience remote environments. However, effective navigation and situational awareness depend on users' prior knowledge of the environment, limiting the usefulness of these systems for exploring unfamiliar places. We explore how integrating location-aware LLM-based narrative capabilities into a mobile robot can support remote exploration. We developed a prototype system, called NarraGuide, that provides narrative guidance for users to explore and learn about a remote place through a dialogue-based interface. We deployed our prototype in a geology museum, where remote participants (n=20) used the robot to tour the museum. Our findings reveal how users perceived the robot's role, engaged in dialogue during the tour, and expressed preferences for encounters with bystanders. Our work demonstrates the potential of LLM-enabled robotic capabilities to deliver location-aware narrative guidance and enrich the experience of exploring remote environments.


Deformation of the panoramic sphere into an ellipsoid to induce self-motion in telepresence users

Laukka, Eetu, Center, Evan G., Ojala, Timo, LaValle, Steven M., Pouke, Matti

arXiv.org Artificial Intelligence

Mobile telepresence robots allow users to feel present in and explore remote environments using technology. Traditionally, these systems are implemented using a controllable camera onboard a mobile robot. Although high-immersion technologies, such as 360-degree cameras, can increase situational awareness and presence, they also introduce significant challenges: the additional processing and bandwidth requirements often result in latencies of up to several seconds. The current delay when streaming a 360-degree camera over the internet makes real-time control of these systems difficult, so working with such high-latency systems requires some form of assistance for the users. This study presents a novel way to utilize optical flow to create an illusion of self-motion for the user during the latency period between the user sending motion commands to the robot and seeing the actual motion through the 360-degree camera stream. We find no significant benefit of the self-motion illusion for the performance or accuracy of controlling a telepresence robot with a latency of 500 ms, as measured by task completion time and collisions with objects. There is some evidence that the method might increase virtual reality (VR) sickness, as measured by the simulator sickness questionnaire (SSQ). We conclude that further adjustments are necessary in order to render the method viable.
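The core idea of masking control latency with an illusory self-motion can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation (which deforms the panoramic sphere into an ellipsoid): it merely shifts an equirectangular panorama by the yaw the robot is predicted to accumulate during the latency window. The function name and the pure-yaw motion model are illustrative assumptions.

```python
import numpy as np

def predicted_frame(frame: np.ndarray, yaw_rate_deg_s: float, latency_s: float) -> np.ndarray:
    """Latency masking sketch: horizontally shift an equirectangular panorama
    by the rotation the robot is expected to perform during the latency
    window, giving the operator an immediate impression of self-motion
    before the real 360-degree video catches up.
    """
    h, w = frame.shape[:2]
    # Yaw (in degrees) accumulated while waiting for the delayed stream.
    yaw_deg = yaw_rate_deg_s * latency_s
    # In an equirectangular projection, 360 degrees span the full image width.
    shift_px = int(round(yaw_deg / 360.0 * w))
    # Turning right shifts the panorama left; wrap-around is free with np.roll.
    return np.roll(frame, -shift_px, axis=1)

# Example: a 500 ms latency at 90 deg/s yaw predicts a 45-degree shift.
panorama = np.zeros((4, 360), dtype=np.uint8)
panorama[:, 0] = 1  # marker column at azimuth 0
shifted = predicted_frame(panorama, yaw_rate_deg_s=90.0, latency_s=0.5)
```

In a real system the shift would be recomputed every display frame from the commanded velocity, then blended out once the genuine camera motion arrives in the stream.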


Social and Telepresence Robots for Accessibility and Inclusion in Small Museums

Balossino, Nello, Damiano, Rossana, Gena, Cristina, Lillo, Alberto, Marras, Anna Maria, Mattutino, Claudio, Pizzo, Antonio, Prin, Alessia, Vernero, Fabiana

arXiv.org Artificial Intelligence

There are still many museums that present accessibility barriers, particularly regarding perceptual, cultural, and cognitive aspects. This is especially evident in low-density population areas. The aim of the ROBSO-PM project is to improve the accessibility of small museums with the use of social robots and social telepresence robots, focusing on three museums as a case study: the Museum of the Holy Shroud in Turin, a small but globally known institution, and two lesser-known mountain museums, the Museum of the Champlas du Col Carnival and the Pragelato Museum of Alpine Peoples' Costumes and Traditions. The project explores two main applications for robots: as guides to support inclusive visits for foreign or disabled visitors, and as telepresence tools allowing people with limited mobility to access museums remotely. From a research perspective, key topics include storytelling, robot personality, empathy, personalization, and, in the case of telepresence, collaboration between the robot and the person, with clearly defined roles and autonomy.


Tangles: Unpacking Extended Collision Experiences with Soma Trajectories

Benford, Steve, Garrett, Rachael, Li, Christine, Tennent, Paul, Núñez-Pacheco, Claudia, Kucukyilmaz, Ayse, Tsaknaki, Vasiliki, Höök, Kristina, Caleb-Solly, Praminda, Marshall, Joe, Schneiders, Eike, Popova, Kristina, Afana, Jude

arXiv.org Artificial Intelligence

We reappraise the idea of colliding with robots, moving from a position that tries to avoid or mitigate collisions to one that considers them an important facet of human interaction. We report on a soma design workshop that explored how our bodies could collide with telepresence robots, mobility aids, and a quadruped robot. Based on our findings, we employed soma trajectories to analyse collisions as extended experiences that negotiate key transitions of consent, preparation, launch, contact, ripple, sting, untangle, debris and reflect. We then employed these ideas to analyse two collision experiences, an accidental collision between a person and a drone, and the deliberate design of a robot to play with cats, revealing how real-world collisions involve the complex and ongoing entanglement of soma trajectories. We discuss how viewing collisions as entangled trajectories, or tangles, can be used analytically, as a design approach, and as a lens to broach ethical complexity.


Designing Telepresence Robots to Support Place Attachment

Hu, Yaxin, Zhu, Anjun, Toma, Catalina L., Mutlu, Bilge

arXiv.org Artificial Intelligence

People feel attached to places that are meaningful to them, which psychological research calls "place attachment." Place attachment is associated with self-identity, self-continuity, and psychological well-being. Even small cues, including videos, images, sounds, and scents, can facilitate feelings of connection and belonging to a place. Telepresence robots that allow people to see, hear, and interact with a remote place have the potential to establish and maintain a connection with places and support place attachment. In this paper, we explore the design space of robotic telepresence to promote place attachment, including how users might be guided in a remote place and whether they experience the environment individually or with others. We prototyped a telepresence robot that allows one or more remote users to visit a place and be guided by a local human guide or a conversational agent. Participants were 38 university alumni who visited their alma mater via the telepresence robot. Our findings uncovered four distinct user personas in the remote experience and highlighted the need for social participation to enhance place attachment. We generated design implications for future telepresence robot design to support people's connections with places of personal significance.


Teledrive: An Embodied AI based Telepresence System

Banerjee, Snehasis, Paul, Sayan, Roychoudhury, Ruddradev, Bhattacharya, Abhijan, Sarkar, Chayan, Sau, Ashis, Pramanick, Pradip, Bhowmick, Brojeshwar

arXiv.org Artificial Intelligence

This article presents Teledrive, a telepresence robotic system with embodied AI features that empowers an operator to navigate the telerobot in any unknown remote place with minimal human intervention. We conceive Teledrive in the context of democratizing remote caregiving for elderly citizens as well as for isolated patients affected by contagious diseases. In particular, this paper focuses on the problem of navigating to a rough target area (like a bedroom or kitchen) rather than a pre-specified point destination. This ushers in a unique AreaGoal-based navigation feature, which has not been explored in depth in contemporary solutions. Further, we describe an edge-computing-based software system built on a WebRTC-based communication framework to realize the aforementioned scheme through easy-to-use, speech-based human-robot interaction. Moreover, to enhance the ease of operation for the remote caregiver, we incorporate a person-following feature, whereby the robot follows a person on the move on its premises as directed by the operator. Additionally, the system presented is loosely coupled with specific robot hardware, unlike existing solutions. We have evaluated the efficacy of the proposed system through baseline experiments, a user study, and real-life deployment.


"This really lets us see the entire world:" Designing a conversational telepresence robot for homebound older adults

Hu, Yaxin, Stegner, Laura, Kotturi, Yasmine, Zhang, Caroline, Peng, Yi-Hao, Huq, Faria, Zhao, Yuhang, Bigham, Jeffrey P., Mutlu, Bilge

arXiv.org Artificial Intelligence

In this paper, we explore the design and use of conversational telepresence robots to help homebound older adults interact with the external world. An initial needfinding study (N=8) using video vignettes revealed older adults' experiential needs for robot-mediated remote experiences such as exploration, reminiscence, and social participation. We then designed a prototype system to support these goals and conducted a technology probe study (N=11) to garner a deeper understanding of user preferences for remote experiences. The study revealed users' interactive patterns in each desired experience, highlighting the need for robot guidance and for social engagement with both the robot and remote bystanders. Our work identifies a novel design space where conversational telepresence robots can be used to foster meaningful interactions in the remote physical environment. We offer design insights into the robot's proactive role in providing guidance and using dialogue to create personalized, contextualized, and meaningful experiences.


#IROS2023: A glimpse into the next generation of robotics

Robohub

The 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023) kicks off today at Huntington Place in Detroit, Michigan. This year's theme, "The Next Generation of Robotics," is a call to young and senior researchers alike to create a forum where the past, present, and future of robotics converge. The program of IROS 2023 is a blend of theoretical insights and practical demonstrations, designed to foster a culture of innovation and collaboration. Among the highlights are the plenary and keynote talks by eminent personalities in the field of robotics. On the plenary front, Marcie O'Malley from Rice University will delve into the realm of robots that teach and learn with a human touch.


Can we hear physical and social space together through prosody?

Davat, Ambre, Aubergé, Véronique, Feng, Gang

arXiv.org Artificial Intelligence

When human listeners try to guess the spatial position of a speech source, they are influenced by the speaker's production level, regardless of the intensity level reaching their ears. Because the perception of distance is a very difficult task, they rely on their own experience, which tells them that a whispering talker is close to them and that a shouting talker is far away. This study aims to test whether similar results can be obtained for prosodic variations produced by a human speaker in an everyday environment. It consists of a localization task, during which blindfolded subjects had to estimate the incoming voice direction, speaker orientation, and distance of a trained female speaker, who uttered single words following instructions on the intensity and social affect to perform. This protocol was implemented in two experiments. In the first, a complex pretext task was used to distract the subjects from the strange behavior of the speaker. In the second, by contrast, the subjects were fully aware of the prosodic variations, which allowed them to adapt their perception. Results show the importance of the pretext task and suggest that the perception of the speaker's orientation can be influenced by voice intensity.


Robots and the Future - Modern Diplomacy

#artificialintelligence

We are living in the era of "Pervasive Robotics," where robots will be merged into the fabric of day-to-day life as smartphones are today, accomplishing many specialized tasks and often working side-by-side with humans. The robotic revolution will create a future that is more vivid and vibrant than the present. By 2022, robots will have gained such traction that they will be visible on battlefields as military robots, drones, driverless cars, and telepresence robots. Military robots are in the field today. Drones are in the skies, driverless cars are driving on the roads, and telepresence robots are allowing people halfway around the world to see each other over the Internet.